
    Event detection based on generic characteristics of field-sports

    In this paper, we propose a generic framework for event detection in broadcast video of multiple different field-sports. Features indicating significant events are selected, and robust detectors built. These features are rooted in generic characteristics common to all genres of field-sports. The evidence gathered by the feature detectors is combined by means of a support vector machine, which infers the occurrence of an event based on a model generated during a training phase. The system is tested across multiple genres of field-sports including soccer, rugby, hockey and Gaelic football and the results suggest that high event retrieval and content rejection statistics are achievable
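
    As a loose illustration of the fusion step described in this and the following abstracts (and not the authors' actual implementation), the sketch below trains a support vector machine on per-shot outputs of several feature detectors and uses the learned model to label a new shot; the feature names, values and scikit-learn usage are assumptions made for the example.

```python
# Minimal sketch: fusing feature-detector evidence with an SVM to label shots
# as event / non-event. All feature names and numbers are illustrative only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One row per shot: hypothetical evidence such as
# [crowd_audio_level, close_up_ratio, scoreboard_activity, field_end_activity]
X_train = np.array([
    [0.9, 0.8, 1.0, 0.7],   # shots surrounding significant events
    [0.8, 0.7, 1.0, 0.9],
    [0.2, 0.1, 0.0, 0.3],   # routine play
    [0.1, 0.2, 0.0, 0.2],
])
y_train = np.array([1, 1, 0, 0])          # 1 = event, 0 = non-event

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)               # the "training phase" of the abstract

print(model.predict([[0.85, 0.75, 1.0, 0.8]]))   # -> [1], candidate event
```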

    Event detection in field sports video using audio-visual features and a support vector machine

    In this paper, we propose a novel audio-visual feature-based framework for event detection in broadcast video of multiple different field sports. Features indicating significant events are selected and robust detectors built. These features are rooted in characteristics common to all genres of field sports. The evidence gathered by the feature detectors is combined by means of a support vector machine, which infers the occurrence of an event based on a model generated during a training phase. The system is tested generically across multiple genres of field sports including soccer, rugby, hockey, and Gaelic football and the results suggest that high event retrieval and content rejection statistics are achievable

    Audiovisual processing for sports-video summarisation technology

    In this thesis a novel audiovisual feature-based scheme is proposed for the automatic summarisation of sports-video content. The scope of operability of the scheme is designed to encompass the wide variety of sports genres that come under the description ‘field-sports’. Given the assumption that, in terms of conveying the narrative of a field-sports-video, score-update events constitute the most significant moments, it is proposed that their detection should thus yield a favourable summarisation solution. To this end, a generic methodology is proposed for the automatic identification of score-update events in field-sports-video content. The scheme is based on the development of robust extractors for a set of critical features, which are shown to reliably indicate their locations. The evidence gathered by the feature extractors is combined and analysed using a Support Vector Machine (SVM), which performs the event detection process. An SVM is chosen on the basis that its underlying technology represents an implementation of the latest generation of machine learning algorithms, based on recent advances in statistical learning. Effectively, an SVM offers a solution to optimising the classification performance of a decision hypothesis, inferred from a given set of training data. Via a learning phase that utilises a 90-hour field-sports-video training corpus, the SVM infers a score-update event model by observing patterns in the extracted feature evidence. Using a similar but distinct 90-hour evaluation corpus, the effectiveness of this model is then tested generically across multiple genres of field-sports video including soccer, rugby, field hockey, hurling, and Gaelic football. The results suggest that in terms of the summarisation task, both high event retrieval and content rejection statistics are achievable

    Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream

    Advertisement breaks during or between television programmes are typically flagged by a series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertisement detection may be achieved. This document reports on the progress made in the development of this idea into an advertisement detector system that automatically detects the advertisement breaks directly from the MPEG-1 encoded bitstream of digitally captured television broadcasts
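
    A minimal sketch of the underlying idea, under assumed inputs (per-frame mean luminance and short-term audio energy, already extracted from the bitstream); the thresholds are illustrative, not the values used in the reported system:

```python
# Flag runs of frames that are both "black" (low luminance) and "silent"
# (low audio energy); such runs delimit advertisement spots and breaks.
def find_break_flags(luminance, audio_energy,
                     lum_thresh=16.0, audio_thresh=0.01, min_run=5):
    """Return (start, end) frame-index pairs of black-and-silent runs."""
    flags, run_start = [], None
    for i, (lum, eng) in enumerate(zip(luminance, audio_energy)):
        if lum < lum_thresh and eng < audio_thresh:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                flags.append((run_start, i))
            run_start = None
    if run_start is not None and len(luminance) - run_start >= min_run:
        flags.append((run_start, len(luminance)))
    return flags

# Flags recurring at typical spot spacings (tens of seconds) can then be
# grouped into a single advertisement break.
```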

    A framework for event detection in field-sports video broadcasts based on SVM generated audio-visual feature model. Case-study: soccer video

    In this paper we propose a novel audio-visual feature-based framework for event detection in field sports broadcast video. The system is evaluated via a case-study involving MPEG encoded soccer video. Specifically, the evidence gathered by various feature detectors is combined by means of a learning algorithm (a support vector machine), which infers the occurrence of an event based on a model generated during a training phase, utilizing a corpus of 25 hours of content. The system is evaluated using 25 hours of separate test content. Following an evaluation of results obtained, it is shown for this case that both high precision and recall statistics are achievable
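
    For reference, the precision and recall figures referred to above are ratios over detected and missed events; the counts in this small sketch are invented purely for illustration:

```python
# precision = correctly detected events / all detections
# recall    = correctly detected events / all true events
def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

print(precision_recall(true_pos=90, false_pos=10, false_neg=15))
# -> (0.9, 0.857...)
```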

    Audio processing for automatic TV sports program highlights detection

    In today’s fast paced world, the time available to watch long sports programmes is decreasing, while the number of sports channels is rapidly increasing. Many viewers desire the facility to watch just the highlights of sports events. This paper presents a simple, but effective, method for generating sports video highlights summaries. Our method detects semantically important events in sports programmes by using the Scale Factors in the MPEG audio bitstream to generate an audio amplitude profile of the program. The Scale Factors for the subbands corresponding to the voice bandwidth give a strong indication of the level of commentator and/or spectator excitement. When periods of sustained high audio amplitude have been detected and ranked, the corresponding video shots may be concatenated to produce a summary of the program highlights. Our method uses only the Scale Factor information that is directly accessible from the MPEG bitstream, without any decoding, leading to highly efficient computation. It is also rather more generic than many existing techniques, being particularly suitable for the more popular sports televised in Ireland such as soccer, Gaelic football, hurling, rugby, horse racing and motor racing
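
    A rough sketch of this approach, assuming the voice-band scale factors have already been read from the MPEG-1 audio bitstream (the bitstream parsing itself is not shown) and that fixed-length windows are ranked by their mean scale-factor value; the window length and frame rate below are illustrative choices:

```python
import numpy as np

def highlight_candidates(voice_band_scalefactors, window_s=5.0,
                         frames_per_s=38.3, top_n=10):
    """Rank fixed-length windows of the audio amplitude profile.

    voice_band_scalefactors: one value per MPEG audio frame (~38.3 frames/s
    for 44.1 kHz Layer II), e.g. the mean scale factor of the voice subbands.
    """
    profile = np.asarray(voice_band_scalefactors, dtype=float)
    win = int(window_s * frames_per_s)
    n_windows = len(profile) // win
    scores = [(i * win, profile[i * win:(i + 1) * win].mean())
              for i in range(n_windows)]
    scores.sort(key=lambda s: s[1], reverse=True)      # loudest windows first
    return scores[:top_n]                              # (start_frame, score)
```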

    Audio and video processing for automatic TV advertisement detection

    As a partner in the Centre for Digital Video Processing, the Visual Media Processing Group at Dublin City University conducts research and development in the area of digital video management. The current stage of development is demonstrated on our Web-based digital video system called Físchlár [1,2], which provides for efficient recording, analyzing, browsing and viewing of digitally captured television programmes. In order to make the browsing of programme material more efficient, users have requested the option of automatically deleting advertisement breaks. Our initial work on this task focused on locating ad-breaks by detecting patterns of silent black frames which separate individual advertisements and/or complete ad-breaks in most commercial TV stations. However, not all TV stations use silent, black frames to flag ad-breaks. We therefore decided to attempt to detect advertisements using the rate of shot cuts in the digitised TV signal. This paper describes the implementation and performance of both methods of ad-break detection
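
    The second method lends itself to a simple sliding-window count of shot cuts; the sketch below assumes cut timestamps are already available and uses an arbitrary cuts-per-minute threshold, neither of which is taken from the paper:

```python
def high_cut_rate_windows(cut_times_s, window_s=60.0, step_s=10.0,
                          cuts_per_min_thresh=12.0):
    """Return start times (s) of windows whose shot-cut rate exceeds the threshold."""
    if not cut_times_s:
        return []
    windows, t, end = [], 0.0, max(cut_times_s)
    while t <= end:
        n_cuts = sum(t <= c < t + window_s for c in cut_times_s)
        if n_cuts * (60.0 / window_s) >= cuts_per_min_thresh:
            windows.append(t)            # likely advertisement material
        t += step_s
    return windows
```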

    InSPeCT: Integrated Surveillance for Port Container Traffic

    This paper describes a fully-operational content-indexing and management system, designed for monitoring and profiling freight-based vehicular traffic in a seaport environment. The 'InSPeCT' system captures video footage of passing vehicles and uses tailored OCR to index the footage according to vehicle license plates and freight codes. In addition to real-time functionality such as alerting, the system provides advanced search techniques for the efficient retrieval of records, where each vehicle is profiled according to multi-angled video, context information, and links to external information sources. The system is currently being piloted at a busy national seaport, and feedback from port officials indicates that it is extremely useful in supplementing their existing transportation-security structures
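
    Purely as an illustration of the indexing step (and not the InSPeCT implementation), a cropped plate or container-code region could be passed through an off-the-shelf OCR engine and the recognised text stored against the capture record; the region-of-interest handling and pytesseract usage below are assumptions:

```python
import cv2
import pytesseract

def index_vehicle(frame_path, roi):
    """OCR a region of interest (x, y, w, h) from a captured frame."""
    frame = cv2.imread(frame_path)
    x, y, w, h = roi
    crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(crop, config="--psm 7")  # single text line
    return text.strip()    # e.g. a licence plate or ISO 6346 container code
```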

    A content-based retrieval system for UAV-like video and associated metadata

    In this paper we provide an overview of a content-based retrieval (CBR) system that has been specifically designed for handling UAV video and associated metadata. Our emphasis in designing this system is on managing large quantities of such information and providing intuitive and efficient access mechanisms to this content, rather than on analysis of the video content. The retrieval unit in our system is termed a "trip". At capture time, each trip consists of an MPEG-1 video stream and a set of time-stamped GPS locations. An analysis process automatically selects and associates GPS locations with the video timeline. The indexed trip is then stored in a shared trip repository. The repository forms the backend of an MPEG-21 compliant Web 2.0 application for subsequent querying, browsing, annotation and video playback. The system interface allows users to search/browse across the entire archive of trips and, depending on their access rights, to annotate other users' trips with additional information. Interaction with the CBR system is via a novel interactive map-based interface. This interface supports content access by time, date, region of interest on the map, previously annotated specific locations of interest and combinations of these. To develop such a system and investigate its practical usefulness in real world scenarios, clearly a significant amount of appropriate data is required. In the absence of a large volume of UAV data with which to work, we have simulated UAV-like data using GPS tagged video content captured from moving vehicles
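
    A small sketch, under assumed data structures, of the indexing and querying steps described above: GPS fixes are attached to offsets on the video timeline so that a map region-of-interest query can return matching playback positions; the types and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GpsFix:
    t: float      # capture time, seconds since some epoch
    lat: float
    lon: float

def index_trip(video_start_t, fixes):
    """Associate each GPS fix with an offset (seconds) on the video timeline."""
    return sorted((f.t - video_start_t, f.lat, f.lon) for f in fixes)

def query_region(indexed_trip, lat_min, lat_max, lon_min, lon_max):
    """Return video offsets whose GPS fix falls inside the bounding box."""
    return [off for off, lat, lon in indexed_trip
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max]
```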

    MPEG audio bitstream processing towards the automatic generation of sports programme summaries

    The frequency subband scale-factors are fundamental components of MPEG-1 audio encoded bitstreams. Examination of scale-factor weights is sufficient for the establishment of an audio amplitude profile of an audio track. If, for sports programme TV broadcasts, the audio amplitude is assumed to primarily reflect the noise level exhibited by the commentator (and/or attending spectators), then this vocal reaction to the significance of unfolding events may be used as a basis for summarisation, i.e. by relying on the exhilaration, or otherwise, expressed by the commentator/spectators, individual clips of the programme (e.g. camera shots) may be ranked according to their relative significance. A summary may then be produced by amalgamating (chronologically) any number of these clips corresponding to selected audio peaks
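
    A small sketch of the final step, assuming each clip (e.g. camera shot) has already been scored by its audio peak: the top-ranked clips are selected and returned in broadcast order for concatenation; the data layout and figures are illustrative:

```python
def summarise(clips, n_clips=5):
    """clips: list of (start_s, end_s, peak_score) tuples."""
    ranked = sorted(clips, key=lambda c: c[2], reverse=True)[:n_clips]
    return sorted(ranked, key=lambda c: c[0])     # restore chronological order

summary = summarise([(12.0, 20.0, 0.91), (300.0, 312.0, 0.40),
                     (845.0, 860.0, 0.88), (1500.0, 1509.0, 0.75)], n_clips=2)
# -> the two highest audio peaks, in broadcast order
```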